Learning Efficient Sparse and Low Rank Models

Authors

Abstract


Similar Resources

Learning Efficient Structured Sparse Models

We present a comprehensive framework for structured sparse coding and modeling extending the recent ideas of using learnable fast regressors to approximate exact sparse codes. For this purpose, we propose an efficient feed forward architecture derived from the iteration of the block-coordinate algorithm. This architecture approximates the exact structured sparse codes with a fraction of the com...
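To illustrate the unrolling idea mentioned above, the following is a minimal sketch, not the authors' architecture: it unrolls plain ISTA (a close relative of the block-coordinate iteration) into a fixed number of feed-forward layers. The dictionary D, the regularization weight lam, and the layer count are illustrative placeholders.

    import numpy as np

    def soft_threshold(z, theta):
        # Elementwise soft-thresholding (proximal operator of the l1 norm).
        return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

    def unrolled_ista(x, D, n_layers=5, lam=0.1):
        # Approximate sparse codes by unrolling a fixed number of ISTA
        # iterations into a feed-forward computation.
        # x: (m,) signal, D: (m, k) dictionary with unit-norm columns.
        L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
        W = D.T / L                             # "encoder" weights
        S = np.eye(D.shape[1]) - (D.T @ D) / L  # "recurrence" weights
        z = soft_threshold(W @ x, lam / L)
        for _ in range(n_layers - 1):           # each unrolled iteration = one layer
            z = soft_threshold(W @ x + S @ z, lam / L)
        return z

    # toy usage on a synthetic sparse signal
    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50))
    D /= np.linalg.norm(D, axis=0)
    x = D @ (rng.standard_normal(50) * (rng.random(50) < 0.1))
    z_hat = unrolled_ista(x, D)

In learned variants, the matrices W and S and the thresholds are treated as trainable parameters fitted to regress exact sparse codes; the sketch above only fixes them to their ISTA values.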


Sparse and Low Rank Recovery

Compressive sensing (sparse recovery) is a new area in mathematical image and signal processing that predicts that sparse signals can be recovered from what was previously believed to be highly incomplete measurements [3, 5, 7, 12]. Recently, the ideas of this field have been extended to the recovery of low-rank matrices from undersampled information [6, 8]; most notably to the matrix completion...
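To make the matrix-completion idea concrete, here is a minimal sketch, not taken from the cited works, of the standard soft-impute / singular-value-thresholding approach: missing entries are filled with the current low-rank estimate and the singular values are shrunk at each step. The shrinkage level tau, the iteration count, and the observation ratio are illustrative.

    import numpy as np

    def soft_impute(X_obs, mask, tau=1.0, n_iters=100):
        # Fill in missing entries of a partially observed matrix by iterative
        # soft-thresholding of singular values (nuclear-norm shrinkage).
        # X_obs: observed matrix (arbitrary values where unobserved).
        # mask:  boolean array, True where an entry was observed.
        Z = np.zeros_like(X_obs)
        for _ in range(n_iters):
            # keep observed entries, use the current estimate elsewhere
            Y = np.where(mask, X_obs, Z)
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            s = np.maximum(s - tau, 0.0)        # shrink singular values
            Z = (U * s) @ Vt
        return Z

    # toy usage: recover a rank-2 matrix from 60% of its entries
    rng = np.random.default_rng(1)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    mask = rng.random(M.shape) < 0.6
    M_hat = soft_impute(np.where(mask, M, 0.0), mask, tau=0.5, n_iters=200)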


Topics in learning sparse and low-rank models of non-negative data

Advances in information and measurement technology have led to a surge in prevalence of high-dimensional data. Sparse and low-rank modeling can both be seen as techniques of dimensionality reduction, which is essential for obtaining compact and interpretable representations of such data. In this thesis, we investigate aspects of sparse and low-rank modeling in conjunction with non-negative data...


Unifying Low-Rank Models for Visual Learning

Many problems in signal processing, machine learning and computer vision can be solved by learning low-rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces with fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matri...
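The factorization-based hard-rank constraint described above can be illustrated with a small alternating-least-squares sketch; the function name fit_fixed_rank, the rank r, and the ridge term reg are hypothetical choices for this illustration, not the paper's formulation.

    import numpy as np

    def fit_fixed_rank(X, r, n_iters=50, reg=1e-6):
        # Fit a rank-r model X ~ U @ V.T by alternating least squares:
        # the hard rank constraint is imposed by the factorization itself
        # rather than by a rank penalty.
        m, n = X.shape
        rng = np.random.default_rng(0)
        U = rng.standard_normal((m, r))
        V = rng.standard_normal((n, r))
        I = reg * np.eye(r)
        for _ in range(n_iters):
            # solve for U with V fixed, then for V with U fixed
            U = X @ V @ np.linalg.inv(V.T @ V + I)
            V = X.T @ U @ np.linalg.inv(U.T @ U + I)
        return U, V

    # toy usage: an exactly rank-3 matrix should be reproduced almost exactly
    rng = np.random.default_rng(2)
    X = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 25))
    U, V = fit_fixed_rank(X, r=3)
    print(np.linalg.norm(X - U @ V.T))   # near zero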


Greedy Learning of Generalized Low-Rank Models

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex or...
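For reference, the rank-one pursuit idea with the square loss can be sketched as follows. This is a simplified illustration of R1MP-style greedy selection (take the top singular pair of the masked residual as a new rank-one atom, then refit all atom weights by least squares on the observed entries), not the generalized algorithm proposed in the paper; all names and parameters here are illustrative.

    import numpy as np

    def rank_one_pursuit(X_obs, mask, n_atoms=10):
        # Greedy sketch of rank-one matrix pursuit for matrix completion
        # with the square loss.
        residual = np.where(mask, X_obs, 0.0)
        atoms = []                      # rank-one basis matrices u v^T
        for _ in range(n_atoms):
            U, s, Vt = np.linalg.svd(residual, full_matrices=False)
            atoms.append(np.outer(U[:, 0], Vt[0]))
            # least-squares refit of the atom weights on observed entries
            A = np.stack([a[mask] for a in atoms], axis=1)
            w, *_ = np.linalg.lstsq(A, X_obs[mask], rcond=None)
            estimate = sum(wi * a for wi, a in zip(w, atoms))
            residual = np.where(mask, X_obs - estimate, 0.0)
        return estimate

    # toy usage: complete a low-rank matrix observed at 50% of its entries
    rng = np.random.default_rng(3)
    M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))
    mask = rng.random(M.shape) < 0.5
    M_hat = rank_one_pursuit(np.where(mask, M, 0.0), mask, n_atoms=6)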



Journal

Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence

Year: 2015

ISSN: 0162-8828, 2160-9292

DOI: 10.1109/tpami.2015.2392779